New complexity results and algorithms for min-max-min robust combinatorial optimization
In this work we investigate the min-max-min robust optimization problem
applied to combinatorial problems with uncertain cost vectors that are
contained in a convex uncertainty set. The idea of the approach is to calculate
a set of k feasible solutions which are worst-case optimal if in each possible
scenario the best of the k solutions would be implemented. It is known that the
min-max-min robust problem can be solved efficiently if k is at least the
dimension of the problem, while it is theoretically and computationally hard if
k is small. While both cases are well studied in the literature, nothing is
known about the intermediate case, namely if k is smaller than but close to the
dimension of the problem. We approach this open question and show that for a
selection of combinatorial problems the min-max-min problem can be solved
exactly and approximately in polynomial time if some problem-specific values
are fixed. Furthermore, we approach a second open question and present the first
implementable algorithm with oracle-pseudopolynomial runtime for the case that
k is at least the dimension of the problem. The algorithm is based on a
projected subgradient method where the projection problem is solved by the
classical Frank-Wolfe algorithm. Additionally we derive a branch & bound method
to solve the min-max-min problem for arbitrary values of k and perform tests on
knapsack and shortest path instances. The experiments show that despite its
theoretical impact, the projected subgradient method cannot compete with an
already existing method. On the other hand, the performance of the branch &
bound method scales very well with the number of solutions. Thus we are able to
solve instances where k is above some small threshold very efficiently.
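To make the objective concrete, here is a minimal sketch that evaluates the min-max-min value for a fixed set of prepared solutions. The finite scenario list, the function name, and the toy data are illustrative assumptions only; the paper works with a general convex uncertainty set.

```python
# Sketch: evaluating the min-max-min objective for a fixed set of k
# candidate solutions. A finite scenario list stands in for the paper's
# convex uncertainty set (an assumption for illustration).

def min_max_min_value(solutions, scenarios):
    """Worst-case cost when, in each scenario, the best of the
    prepared solutions is implemented.

    solutions: list of 0/1 vectors (feasible combinatorial solutions)
    scenarios: list of cost vectors of the same dimension
    """
    def cost(c, x):
        return sum(ci * xi for ci, xi in zip(c, x))

    # max over scenarios of (min over prepared solutions)
    return max(min(cost(c, x) for x in solutions) for c in scenarios)

# Two prepared solutions for a toy 3-item problem:
solutions = [[1, 0, 1], [0, 1, 1]]
scenarios = [[1, 5, 2], [4, 1, 3]]  # two possible cost realizations
print(min_max_min_value(solutions, scenarios))  # → 4
```

In the first scenario the first solution is cheapest (cost 3), in the second the other one is (cost 4); the adversary therefore achieves 4, which is the min-max-min value of this pair.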
Min-Max-Min Robustness for Combinatorial Problems with Discrete Budgeted Uncertainty
We consider robust combinatorial optimization problems with cost uncertainty where the decision maker can prepare K solutions beforehand and chooses the best of them once the true cost is revealed. Also known as min-max-min robustness (a special case of K-adaptability), this approach is a viable alternative to otherwise intractable two-stage problems. The uncertainty set assumed in this paper stipulates that in any scenario, at most Γ of the components of the cost vector will be higher than expected, which corresponds to the extreme points of the budgeted uncertainty set. While the classical min-max problem with budgeted uncertainty is essentially as easy as the underlying deterministic problem, it turns out that the min-max-min problem is NP-hard for many easy combinatorial optimization problems, and not approximable in general. We thus present an integer programming formulation for solving the problem through a row-and-column generation algorithm. While exact, this algorithm can only cope with small problems, so we present two additional heuristics leveraging the structure of budgeted uncertainty. We compare our row-and-column generation algorithm and our heuristics on knapsack and shortest path instances previously used in the scientific literature and find that the heuristics obtain good-quality solutions in short computational times.
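A brute-force sketch of this worst case, enumerating the extreme points of the budgeted set directly (the toy data are assumptions, and the enumeration is exponential in the dimension, so this is illustrative only; the paper's row-and-column generation algorithm avoids it):

```python
# Sketch: brute-force evaluation of the min-max-min objective under
# discrete budgeted uncertainty. The adversary raises the cost of at
# most Gamma components (the extreme points of the budgeted set); the
# decision maker then picks the best of the K prepared solutions.

from itertools import combinations

def worst_case_value(solutions, c_bar, d, gamma):
    """c_bar: nominal costs, d: deviations, gamma: budget Gamma."""
    n = len(c_bar)
    best_for_adversary = float("-inf")
    for r in range(gamma + 1):
        for S in combinations(range(n), r):  # components pushed up
            c = [c_bar[i] + (d[i] if i in S else 0) for i in range(n)]
            reaction = min(sum(ci * xi for ci, xi in zip(c, x))
                           for x in solutions)
            best_for_adversary = max(best_for_adversary, reaction)
    return best_for_adversary

solutions = [[1, 0, 1], [0, 1, 1]]  # K = 2 prepared solutions
c_bar = [1, 1, 2]                   # nominal costs
d = [3, 3, 1]                       # possible deviations
print(worst_case_value(solutions, c_bar, d, gamma=1))  # → 4
```

Here the adversary's best move is to raise the cost of the component shared by both solutions, since raising a component used by only one of them lets the decision maker switch to the other.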
Faster Algorithms for Min-max-min Robustness for Combinatorial Problems with Budgeted Uncertainty
We consider robust combinatorial optimization problems where the decision maker can react to a scenario by choosing from a finite set of k solutions. This approach is appropriate for decision problems under uncertainty where the implementation of decisions requires preparing the ground. We focus on the case where the set of possible scenarios is described through a budgeted uncertainty set and provide three algorithms for the problem. The first algorithm heuristically solves the dualized problem, a non-convex mixed-integer non-linear program (MINLP), via an alternating optimization approach. The second algorithm solves the MINLP exactly for k = 2 through a dedicated spatial branch-and-bound algorithm. The third approach enumerates k-tuples, relying on strong bounds to avoid a complete enumeration. We test our methods on shortest path instances that were used in the previous literature and on randomly generated knapsack instances, and find that our methods considerably outperform previous approaches. Many instances that were previously not solved within hours can now be solved within a few minutes, often even faster.
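The k-tuple enumeration idea can be sketched as follows; the candidate list, the pruning bound, and the evaluation oracle here are simplified stand-ins for the paper's stronger bounds, assuming a finite scenario set for illustration.

```python
# Sketch: enumerate k-tuples of candidate solutions, keeping the tuple
# with the smallest worst-case cost and pruning tuples whose lower
# bound cannot beat the incumbent.

from itertools import combinations

def enumerate_k_tuples(candidates, k, worst_case, lower_bound):
    best_val, best_tuple = float("inf"), None
    for tup in combinations(candidates, k):
        if lower_bound(tup) >= best_val:
            continue  # pruned: cannot improve on the incumbent
        val = worst_case(tup)
        if val < best_val:
            best_val, best_tuple = val, tup
    return best_tuple, best_val

# Toy instance: unit-vector candidates and two cost scenarios.
scenarios = [[1, 5, 2], [4, 1, 3]]
candidates = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def cost(c, x):
    return sum(ci * xi for ci, xi in zip(c, x))

def worst_case(tup):
    return max(min(cost(c, x) for x in tup) for c in scenarios)

def lower_bound(tup):
    # Valid bound: the cheapest cost any member can ever achieve,
    # since max_c min_x >= min_c min_x.
    return min(min(cost(c, x) for c in scenarios) for x in tup)

best, val = enumerate_k_tuples(candidates, 2, worst_case, lower_bound)
print(val)  # → 1
```

On this toy instance the first tuple already attains the bound, so the two remaining tuples are pruned without being evaluated.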
Neur2RO: Neural Two-Stage Robust Optimization
Robust optimization provides a mathematical framework for modeling and
solving decision-making problems under worst-case uncertainty. This work
addresses two-stage robust optimization (2RO) problems (also called adjustable
robust optimization), wherein first-stage and second-stage decisions are made
before and after uncertainty is realized, respectively. This results in a
nested min-max-min optimization problem which is extremely challenging
computationally, especially when the decisions are discrete. We propose
Neur2RO, an efficient machine learning-driven instantiation of
column-and-constraint generation (CCG), a classical iterative algorithm for
2RO. Specifically, we learn to estimate the value function of the second-stage
problem via a novel neural network architecture that is easy to optimize over
by design. Embedding our neural network into CCG yields high-quality solutions
quickly as evidenced by experiments on two 2RO benchmarks, knapsack and capital
budgeting. For knapsack, Neur2RO finds solutions that are within roughly
of the best-known values in a few seconds, compared to the three hours required by the
state-of-the-art exact branch-and-price algorithm; for larger and more complex
instances, Neur2RO finds even better solutions. For capital budgeting, Neur2RO
outperforms three variants of the K-adaptability algorithm, particularly on
the largest instances, with a 5 to 10-fold reduction in solution time. Our code
and data are available at https://github.com/khalil-research/Neur2RO
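The CCG loop that Neur2RO instantiates can be sketched generically. Here the second-stage value function `phi` is an arbitrary callable (in Neur2RO it is a trained neural network that is easy to optimize over), and the finite candidate and scenario sets, the names, and the toy data are illustrative assumptions, not the paper's setup.

```python
# Sketch: a generic column-and-constraint generation (CCG) loop for
# two-stage robust optimization. The scenario pool grows until the
# adversary can no longer find a scenario worse than those covered.

def ccg(first_stage_candidates, scenarios, phi, max_iters=50):
    """Alternate between a master problem and an adversary problem."""
    pool = [scenarios[0]]  # start from a single scenario
    for _ in range(max_iters):
        # Master: best first-stage decision against the current pool.
        x = min(first_stage_candidates,
                key=lambda x: max(phi(x, s) for s in pool))
        # Adversary: worst scenario for that decision.
        s_star = max(scenarios, key=lambda s: phi(x, s))
        if phi(x, s_star) <= max(phi(x, s) for s in pool) + 1e-9:
            return x, phi(x, s_star)  # pool already covers the worst case
        pool.append(s_star)  # add the violating scenario and repeat
    return x, phi(x, s_star)

# Toy instance: phi(x, s) = |x - s|, decisions and scenarios on a line.
xs = [0.0, 0.5, 1.0]
ss = [0.0, 1.0]
x_opt, val = ccg(xs, ss, lambda x, s: abs(x - s))
print(x_opt, val)  # → 0.5 0.5
```

The loop converges in two iterations on this toy instance: the first master solution is refuted by the adversary, the violating scenario is added to the pool, and the hedged decision 0.5 then covers both scenarios.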
Finding Regions of Counterfactual Explanations via Robust Optimization
Counterfactual explanations play an important role in detecting bias and
improving the explainability of data-driven classification models. A
counterfactual explanation (CE) is a minimal perturbed data point for which the
decision of the model changes. Most of the existing methods can only provide
one CE, which may not be achievable for the user. In this work we derive an
iterative method to calculate robust CEs, i.e. CEs that remain valid even after
the features are slightly perturbed. To this end, our method provides a whole
region of CEs allowing the user to choose a suitable recourse to obtain a
desired outcome. We use algorithmic ideas from robust optimization and prove
convergence results for the most common machine learning methods including
logistic regression, decision trees, random forests, and neural networks. Our
experiments show that our method can efficiently generate globally optimal
robust CEs for a variety of common data sets and classification models.
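For the linear (logistic-regression) case, the robustness requirement has a simple closed form: push the point past the decision boundary by an extra margin of ε·‖w‖, so that every point within distance ε of the counterfactual stays on the desired side. A sketch under that assumption follows; the names and data are illustrative, and the paper's iterative method also covers trees, forests, and neural networks.

```python
# Sketch: a robust counterfactual for a linear classifier sign(w.z + b).
# Requiring w.z + b >= eps * ||w|| guarantees w.z' + b >= 0 for every
# z' within Euclidean distance eps of z (standard linear algebra, not
# the paper's general method).

import math

def robust_ce_linear(x, w, b, eps):
    """Minimally perturb x along w so the whole eps-ball around the
    result is classified positive."""
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    target = eps * norm_w          # required robust margin
    step = (target - score) / norm_w  # signed distance to move
    if step <= 0:
        return list(x)  # already robustly on the positive side
    return [xi + step * wi / norm_w for xi, wi in zip(x, w)]

x = [0.0, 0.0]       # original point, classified negative
w = [1.0, 0.0]       # classifier weights
b = -1.0             # intercept
z = robust_ce_linear(x, w, b, eps=0.1)
print(z)  # a point just past the boundary with a 0.1 margin
```

The returned point satisfies w·z + b = ε·‖w‖ exactly, i.e. it sits on the shifted boundary, which is the minimal robust perturbation in the Euclidean sense.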